Robust Variable Selection and Regularization in Quantile Regression Based on Adaptive-LASSO and Adaptive E-NET
Abstract
Although variable selection and regularization procedures have been extensively considered in the literature for the quantile regression (QR) scenario via penalization, many such procedures fail to deal simultaneously with data aberrations in the design space, namely, high leverage points (X-space outliers) and collinearity challenges. Some of these observations, referred to as collinearity-influential observations, tend to adversely alter the eigenstructure of the design matrix by inducing or masking collinearity. Therefore, in the literature, it is recommended that the two problems be dealt with simultaneously. In this article, we suggest adaptive LASSO and adaptive E-NET penalized QR procedures (QR-ALASSO and QR-AE-NET), in which the adaptive weights are based on a QR estimator, as remedies. We extend the methodology to the weighted versions of the WQR-LASSO and WQR-E-NET procedures suggested earlier, which use a RIDGE (RR) parameter estimator to derive the weights. The use of ℓ1-based weights may be plausible at the median (QR at τ = 0.5) under a symmetrical distribution, but not so at extreme quantile levels; we therefore use QR-based estimators to derive the weights. We carried out a comparative study of QR-LASSO, QR-E-NET, and the procedures suggested here, viz., QR-ALASSO, QR-AE-NET, and their weighted versions (WQR-ALASSO and WQR-AE-NET). The simulation results show that WQR-ALASSO and WQR-AE-NET generally outperform their nonadaptive counterparts. For predictor matrices under normality, QR-ALASSO and QR-AE-NET, respectively, outperform the non-adaptive and unweighted scenarios as follows: in all 16 cases (100%) with respect to correctly selected (shrunk) zero coefficients; in 88% with respect to fitted models; and in 81% with respect to prediction. In the WQR scenarios, the adaptive procedures outperform their counterparts 75% of the time with respect to both fitted models and shrunk coefficients, and in 63% of scenarios with respect to prediction; in some settings they dominate 100% of the time, and in others 50% of the time (performing equally well otherwise). Under the t-distribution, the respective results for QR-ALASSO and QR-AE-NET follow a similar pattern. Additionally, the former outperforms the latter in prediction, whilst there is no clear "winner" with respect to the other two measures. Overall, under collinearity-masking, the adaptive procedures outperform the non-adaptive ones on most metrics, dominating in 38% of cases on one measure and 62% with respect to shrunk coefficients, and vice versa in the remaining cases. Under heavy-tailed distributions (t-distributions with d ∈ (1; 6) degrees of freedom), the dominance of the adaptive procedures over the non-adaptive ones is again evident, as they perform better most of the time with respect to fitted models, shrunk coefficients, and prediction in the majority of cases. Results from applications to real-life data sets are more or less in line with the simulation results.
Similar Articles
Bayesian Quantile Regression with Adaptive Lasso Penalty for Dynamic Panel Data
Dynamic panel data models form an important part of medical, social, and economic studies. The presence of the lagged dependent variable as an explanatory variable is a notable trait of these models. The estimation problem in these models arises from the correlation between the lagged dependent variable and the current disturbance. Recently, quantile regression to analyze dynamic pa...
Adaptive Robust Variable Selection
Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ul...
Bayesian Quantile Regression with Adaptive Elastic Net Penalty for Longitudinal Data
Longitudinal studies form an important part of epidemiological surveys, clinical trials, and social studies. In longitudinal studies, measurement of the responses is conducted repeatedly through time. Often, the main goal is to characterize the change in responses over time and the factors that influence the change. Recently, to analyze this kind of data, quantile regression has been taken ...
Variable Selection in Quantile Regression
After its inception in Koenker and Bassett (1978), quantile regression has become an important and widely used technique to study the whole conditional distribution of a response variable, and has grown into a major tool of applied statistics over the last three decades. In this work, we focus on the variable selection aspect of penalized quantile regression. Under some mild conditions, we demo...
Robust Regression through the Huber's criterion and adaptive lasso penalty
The Huber's criterion is a useful method for robust regression. The adaptive least absolute shrinkage and selection operator (lasso) is a popular technique for simultaneous estimation and variable selection, and the adaptive weights in the adaptive lasso allow the estimator to enjoy the oracle properties. In this paper we propose to combine the Huber's criterion and the adaptive lasso penalty. This regression tech...
Journal
Journal title: Computation (Basel)
Year: 2022
ISSN: 2079-3197
DOI: https://doi.org/10.3390/computation10110203